This paper explores the emerging role of Large Language Models (LLMs) such as ChatGPT, Gemini, and Claude as AI assistants for information-seeking. Recent studies indicate that LLMs support more conversational querying than traditional keyword search, and their ability to generate summaries and novel text further differentiates them. However, risks involving bias, inaccuracy, and missing or improper citations necessitate caution. This paper examines how LLMs compare to existing practices across key domains of information access, service, and ethics. The analysis suggests that, at present, combining LLMs with traditional methods is advisable: LLMs introduce helpful capabilities but lack specialized expertise, and verifying LLM-generated content remains essential given these risks. Recommendations include prompt optimization, improved training, disclaimers about limitations, and transparency around data use. With responsible design and policies, LLMs could enhance certain information-seeking tasks, but at this stage they should not fully replace diligent human research. More research is needed on realizing the benefits while mitigating the pitfalls.